In telecommunication and information theory, the code rate (or information rate〔Huffman, W. Cary, and Vera Pless, ''Fundamentals of Error-Correcting Codes'', Cambridge University Press, 2003.〕) of a forward error correction code is the proportion of the data stream that is useful (non-redundant). That is, if the code rate is ''k/n'', then for every ''k'' bits of useful information the encoder generates ''n'' bits of data in total, of which ''n − k'' are redundant. If ''R'' is the gross bit rate or data signalling rate (inclusive of redundant error coding), the net bit rate (the useful bit rate exclusive of error-correction codes) is ≤ ''R''·''k''/''n''.

For example, the code rate of a convolutional code is typically 1/2, 2/3, 3/4, 5/6, 7/8, etc., meaning that one redundant bit is inserted after every single, second, third, etc., information bit. The code rate of the Reed–Solomon block code denoted RS(204,188) is 188/204, meaning that 204 − 188 = 16 redundant bytes are added to each block of 188 bytes of useful information. A few error-correcting codes do not have a fixed code rate; these are known as rateless erasure codes.

Note that bit/s is a more widespread unit of measurement for the information rate, implying that it is synonymous with ''net bit rate'' or ''useful bit rate'', exclusive of error-correction codes.
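As a worked illustration of the relations above, the following is a minimal Python sketch (the function names are illustrative, not drawn from any standard library) that computes a code rate ''k/n'' and the corresponding upper bound on the net bit rate, using the RS(204,188) and rate-1/2 convolutional code examples from this article:

<syntaxhighlight lang="python">
from fractions import Fraction

def code_rate(k: int, n: int) -> Fraction:
    """Code rate of a code that maps k useful bits/symbols to n coded ones."""
    return Fraction(k, n)

def net_bit_rate(gross_bit_rate: float, k: int, n: int) -> float:
    """Upper bound on the useful bit rate: net bit rate <= R * k / n."""
    return gross_bit_rate * k / n

# RS(204,188): 204 - 188 = 16 redundant bytes per 188-byte block.
print(code_rate(188, 204))            # 47/51, approximately 0.922

# A rate-1/2 convolutional code halves the useful throughput:
# at a gross rate of 2 Mbit/s, at most 1 Mbit/s carries information.
print(net_bit_rate(2_000_000, 1, 2))  # 1000000.0
</syntaxhighlight>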
==See also==
* Information rate
* Source information rate (Entropy rate)